
Conversation

@eafif commented on Jan 2, 2025

One use case is RAG applications: we can pass a system prompt instructing the LLM to answer only from the provided context and not to make up an answer when the context is blank.
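For illustration, a minimal sketch of the intended call (the method name and keyword arguments follow this PR; the prompt text, the `rag` object, and `retrieved_chunks` are hypothetical):

# Hypothetical RAG usage: the system prompt constrains the model to the retrieved context.
system_prompt = <<~PROMPT
  Answer using only the provided context.
  If the context is blank or does not contain the answer, say you don't know.
PROMPT

rag.generate_messages_and_chat(
  question: "What is the refund policy?",
  context: retrieved_chunks.join("\n"),  # assumed: chunks returned by a vector search
  system_prompt: system_prompt
) { |chunk| print(chunk) }               # optional block streams the response one String at a time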

@eafif changed the title from "Ability to pass a system prompt when for chat completion" to "Ability to pass a system prompt for chat completion" on Jan 2, 2025
@andreibondarev requested a review from Copilot on Apr 17, 2025 at 18:20

Copilot AI left a comment

Copilot reviewed 17 out of 17 changed files in this pull request and generated 1 comment.

# @param system_prompt [String] Content of the prompt to send as "system"
# @yield [String] Stream responses back one String at a time
# @return [String] The answer to the question
def generate_messages_and_chat(question: , context: , system_prompt: nil, &block)

Copilot AI commented on Apr 17, 2025

The method signature includes stray spaces between the keyword arguments 'question:' and 'context:' and the commas that follow them. Ruby does parse this form, but it is unidiomatic and easy to misread; the method is better defined as 'def generate_messages_and_chat(question:, context:, system_prompt: nil, &block)'.

Suggested change
def generate_messages_and_chat(question: , context: , system_prompt: nil, &block)
def generate_messages_and_chat(question:, context:, system_prompt: nil, &block)
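
For context, a minimal sketch of how the corrected method might assemble the messages before chatting. This is an assumption based on the documented parameters, not the PR's actual diff; the message-hash shape and the llm.chat call are illustrative:

def generate_messages_and_chat(question:, context:, system_prompt: nil, &block)
  messages = []
  # Prepend a "system" message only when a system prompt was supplied.
  messages << { role: "system", content: system_prompt } if system_prompt
  # Combine the retrieved context and the user's question into one "user" message.
  messages << { role: "user", content: "Context:\n#{context}\n\nQuestion: #{question}" }
  # Delegate to the underlying LLM; a given block receives streamed response chunks.
  llm.chat(messages: messages, &block)
end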
